Results 1 - 20 of 14,944
2.
Sensors (Basel); 24(5), 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38475138

ABSTRACT

GPU (graphics processing unit)-based parallel processing is the approach of using more than one processing unit to overcome the computational complexity of the different medical imaging methods that make up an overall job. It is extremely important for medical imaging techniques such as image classification, object detection, image segmentation, registration, and content-based image retrieval, since it allows software to perform multiple computations at once and thus compute time-efficiently. Magnetic resonance imaging (MRI), in turn, is a non-invasive imaging technology that can depict the anatomy and biological processes of the human body. Implementing GPU-based parallel processing approaches in brain MRI analysis with these medical imaging techniques can help achieve immediate and timely image processing. Therefore, this extended review (an extension of the IWBBIO2023 conference paper) offers a thorough overview of the literature, with an emphasis on the expanding use of GPU-based parallel processing methods for the medical analysis of brain MRIs with the imaging techniques mentioned above, given the need for quicker computation to acquire early and real-time feedback in medicine. We examined articles published between 2019 and 2023, organized in a literature matrix that records the tasks, techniques, MRI sequences, and processing results. The methods discussed in this review demonstrate the advances achieved so far in minimizing computing runtime, as well as the obstacles and problems still to be solved.
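The data-parallel decomposition the abstract describes can be sketched in miniature. This toy example is not from the paper; the row-wise thresholding "kernel" and all names are illustrative. It splits an image into independent rows and processes them concurrently, the same pattern a GPU uses when it assigns independent pixels or slices to its threads:

```python
from concurrent.futures import ThreadPoolExecutor

def threshold_row(row):
    # Toy per-row "kernel": binarize one image row at intensity 128.
    # On a GPU, each row (or even each pixel) would map to its own thread.
    return [1 if px >= 128 else 0 for px in row]

def segment_parallel(image, workers=4):
    # Rows are independent, so they can be processed concurrently --
    # the data-parallel pattern that GPU imaging pipelines exploit.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(threshold_row, image))

image = [[0, 127, 128, 255],
         [200, 50, 130, 10]]
print(segment_parallel(image))  # [[0, 0, 1, 1], [1, 0, 1, 0]]
```

On real hardware the same decomposition would be expressed with a GPU framework (e.g. CUDA kernels), where the per-element independence shown here is what makes the speedup possible.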


Subjects
Algorithms, Computer Graphics, Humans, Software, Brain, Magnetic Resonance Imaging/methods, Computer-Assisted Image Processing/methods
5.
IEEE Trans Vis Comput Graph; 30(5): 2247-2256, 2024 May.
Article in English | MEDLINE | ID: mdl-38437075

ABSTRACT

Physical QWERTY keyboards are the current standard for performing precision text-entry with extended reality devices. Ideally, there would exist a comparable, self-contained solution that works anywhere, without requiring external keyboards. Unfortunately, when physical keyboards are recreated virtually, we currently lose critical haptic feedback information from the sense of touch, which impedes typing. In this paper, we introduce the MusiKeys Technique, which uses auditory feedback in virtual reality to communicate missing haptic feedback information typists normally receive when using a physical keyboard. To examine this concept, we conducted a user study with 24 participants which encompassed four mid-air virtual keyboards augmented with increasing amounts of feedback information, along with a fifth physical keyboard for reference. Results suggest that providing clicking feedback on key-press and key-release improves typing performance compared to not providing auditory feedback, which is consistent with the literature. We also found that audio can serve as a substitute for information contained in haptic feedback, in that users can accurately perceive the presented information. However, under our specific study conditions, this awareness of the feedback information did not yield significant differences in typing performance. Our results suggest this kind of feedback replacement can be perceived by users but needs more research to tune and improve the specific techniques.


Subjects
Haptic Technology, Touch Perception, Humans, Equipment Design, Computer Graphics, Touch, User-Computer Interface
6.
IEEE Trans Vis Comput Graph; 30(5): 2077-2086, 2024 May.
Article in English | MEDLINE | ID: mdl-38437077

ABSTRACT

Eye tracking has shown great promise in many scientific fields and daily applications, ranging from the early detection of mental health disorders to foveated rendering in virtual reality (VR). These applications all call for a robust system for high-frequency near-eye movement sensing and analysis in high precision, which cannot be guaranteed by the existing eye tracking solutions with CCD/CMOS cameras. To bridge the gap, in this paper, we propose Swift-Eye, an offline, precise and robust pupil estimation and tracking framework to support high-frequency near-eye movement analysis, especially when the pupil region is partially occluded. Swift-Eye is built upon the emerging event cameras to capture the high-speed movement of eyes in high temporal resolution. Then, a series of bespoke components are designed to generate high-quality near-eye movement video at a frame rate above one kilohertz and to deal with the occlusion over the pupil caused by involuntary eye blinks. According to our extensive evaluations on EV-Eye, a large-scale public dataset for eye tracking using event cameras, Swift-Eye shows high robustness against significant occlusion. It improves the IoU and F1-score of the pupil estimation by 20% and 12.5%, respectively, compared with the second-best competing approach, when over 80% of the pupil region is occluded by the eyelid. Lastly, it provides continuous and smooth traces of pupils in extremely high temporal resolution and can support high-frequency eye movement analysis and a number of potential applications, such as mental health diagnosis, behaviour-brain association, etc. The implementation details and source codes can be found at https://github.com/ztysdu/Swift-Eye.
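The IoU and F1-score gains quoted above are computed over segmentation masks; both metrics reduce to simple set arithmetic over pixels. A generic sketch (not Swift-Eye's code) over flattened binary masks:

```python
def iou(pred, truth):
    # Intersection over union of two binary masks (flattened pixel lists).
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0

def f1(pred, truth):
    # F1 = 2*TP / (2*TP + FP + FN); for binary masks this equals the
    # Dice coefficient commonly reported in segmentation work.
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    return 2 * tp / (2 * tp + fp + fn)

pred = [1, 1, 0, 0]
truth = [1, 0, 1, 0]
print(iou(pred, truth))  # 0.333... (1 shared pixel, 3 in the union)
print(f1(pred, truth))   # 0.5
```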


Subjects
Algorithms, Eye Movements, Computer Graphics, Blinking, Pupil
7.
IEEE Trans Vis Comput Graph; 30(5): 2776-2784, 2024 May.
Article in English | MEDLINE | ID: mdl-38437079

ABSTRACT

Asynchronous digital communication is a widely applied and well-known form of information exchange. Most pieces of technology make use of some variation of asynchronous communication systems, be it messaging or email applications. This allows recipients to process digital messages immediately (synchronous) or whenever they have time (asynchronous), meaning that purely digital interruptions can be mitigated easily. Mixed Reality systems have the potential to not only handle digital interruptions but also interruptions in physical space, e.g., caused by co-workers in workspaces or learning environments. However, the benefits of such systems previously remained untested in the context of Mixed Reality. We conducted a user study ($\mathrm{N}=26$) to investigate the impact that the timing of task delivery has on the participants' performance, workflow, and emotional state. Participants had to perform several cognitively demanding tasks in a Mixed Reality workspace. Inside the virtual workspace, we simulated in-person task delivery either during tasks (i.e., interrupting the participant) or between tasks (i.e., delaying the interruption). Our results show that delaying interruptions has a significant impact on subjective metrics like the perceived performance and workload.


Subjects
Augmented Reality, Humans, Computer Graphics, Workload/psychology, Communication, Learning, Task Performance and Analysis
8.
IEEE Trans Vis Comput Graph; 30(5): 2206-2216, 2024 May.
Article in English | MEDLINE | ID: mdl-38437082

ABSTRACT

In Mixed Reality (MR), users' heads are largely (if not completely) occluded by the MR Head-Mounted Display (HMD) they are wearing. As a consequence, one cannot see their facial expressions and other communication cues when interacting locally. In this paper, we investigate how displaying virtual avatars' heads on top of the (HMD-occluded) heads of participants in a Video See-Through (VST) Mixed Reality local collaborative task could improve their collaboration as well as social presence. We hypothesized that virtual heads would convey more communicative cues (such as eye direction or facial expressions) hidden by the MR HMDs and lead to better collaboration and social presence. To do so, we conducted a between-subject study ($\mathrm{n}=88$) with two independent variables: the type of avatar (CartoonAvatar/RealisticAvatar/NoAvatar) and the level of facial expressions provided (HighExpr/LowExpr). The experiment involved two dyadic communication tasks: (i) the "20-question" game, where one participant asks questions to guess a hidden word known by the other participant, and (ii) an urban planning problem, where participants have to solve a puzzle by collaborating. Each pair of participants performed both tasks using a specific type of avatar and facial animation. Our results indicate that while adding an avatar's head does not necessarily improve social presence, the amount of facial expression provided through the social interaction does have an impact. Moreover, participants rated their performance higher when observing a realistic avatar but rated the cartoon avatars as less uncanny. Taken together, our results contribute to a better understanding of the role of partial avatars in local MR collaboration and pave the way for further research exploring collaboration in different scenarios, with different avatar types or MR setups.


Assuntos
Realidade Aumentada , 60453 , Humanos , Interface Usuário-Computador , Gráficos por Computador , Expressão Facial
9.
IEEE Trans Vis Comput Graph; 30(5): 2485-2495, 2024 May.
Article in English | MEDLINE | ID: mdl-38437085

ABSTRACT

We present a systematic review of 111 papers that measure the impact of virtual experiences created through head-mounted displays (HMDs) on empathy. Our goal was to analyze the conditions and the extent to which virtual reality (VR) enhances empathy. To achieve this, we categorized the relevant literature according to measurement methods, correlated human factors, viewing experiences, topics, and participants. Meta-analysis was performed based on categorized themes, and under specified conditions, we found that VR can improve empathy. Emotional empathy increased temporarily after the VR experience and returned to its original level over time, whereas cognitive empathy remained enhanced. Furthermore, while VR did not surpass 2D video in improving emotional empathy, it did enhance cognitive empathy, which is associated with embodiment. Our results are consistent with existing research suggesting differentiation between cognitive empathy (influenced by environmental factors and learnable) and emotional empathy (highly heritable and less variable). Interactivity, target of empathy, and point of view were not found to significantly affect empathy, but participants' age and nationality were found to influence empathy levels. It can be concluded that VR enhances cognitive empathy by immersing individuals in the perspective of others and that storytelling and personal characteristics are more important than the composition of the VR scene. Our findings provide guiding information for creating empathy content in VR and designing experiments to measure empathy.


Subjects
Smart Glasses, Virtual Reality, Humans, Empathy, Computer Graphics, Emotions
10.
IEEE Trans Vis Comput Graph; 30(5): 2570-2579, 2024 May.
Article in English | MEDLINE | ID: mdl-38437086

ABSTRACT

We provide the first perceptual quantification of users' sensitivity to radial optic flow artifacts and demonstrate a promising approach for masking this optic flow artifact via blink suppression. Near-eye HMDs allow users to feel immersed in virtual environments by providing visual cues, like motion parallax and stereoscopy, that mimic how we view the physical world. However, these systems exhibit a variety of perceptual artifacts that can limit their usability and the user's sense of presence in VR. One well-known artifact is the vergence-accommodation conflict (VAC). Varifocal displays can mitigate VAC, but bring with them other artifacts such as a change in virtual image size (radial optic flow) when the focal plane changes. We conducted a set of psychophysical studies to measure users' ability to perceive this radial flow artifact before, during, and after self-initiated blinks. Our results showed that visual sensitivity was reduced by a factor of 10 at the start of and for ~70 ms after a blink was detected. Pre- and post-blink sensitivity was, on average, ~0.15% image size change during normal viewing and increased to ~1.5-2.0% during blinks. Our results imply that a rapid (under 70 ms) radial optic flow distortion can go unnoticed during a blink. Furthermore, our results provide empirical data that can be used to inform engineering requirements for both hardware design and software-based graphical correction algorithms for future varifocal near-eye displays. Our project website is available at https://gamma.umd.edu/ROF/.
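The thresholds reported above suggest a simple gating rule for a varifocal display driver: apply a radial image-size change only while sensitivity is blink-suppressed. This sketch is an illustrative simplification built from the abstract's numbers (~0.15% baseline threshold, ~1.5% during blinks, ~70 ms suppression window), not the authors' algorithm:

```python
def flow_change_unnoticed(size_change_pct, ms_since_blink):
    # Thresholds from the abstract: ~0.15% image-size change is detectable
    # in normal viewing, rising to ~1.5% during a blink, with sensitivity
    # suppressed for ~70 ms after blink detection.
    BASELINE_PCT = 0.15
    BLINK_PCT = 1.5
    BLINK_WINDOW_MS = 70
    limit = BLINK_PCT if ms_since_blink <= BLINK_WINDOW_MS else BASELINE_PCT
    return size_change_pct < limit

print(flow_change_unnoticed(1.0, 30))   # True  -- masked by the blink
print(flow_change_unnoticed(1.0, 200))  # False -- visible in normal viewing
```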


Assuntos
Fluxo Óptico , Gráficos por Computador , Acomodação Ocular , Algoritmos , Software
11.
IEEE Trans Vis Comput Graph; 30(5): 2671-2681, 2024 May.
Article in English | MEDLINE | ID: mdl-38437090

ABSTRACT

There is little research on how Virtual Reality (VR) applications can identify and respond meaningfully to users' emotional changes. In this paper, we investigate the impact of Context-Aware Empathic VR (CAEVR) on the emotional and cognitive aspects of user experience in VR. We developed a real-time emotion prediction model using electroencephalography (EEG), electrodermal activity (EDA), and heart rate variability (HRV) and used this in personalized and generalized models for emotion recognition. We then explored the application of this model in a context-aware empathic (CAE) virtual agent and an emotion-adaptive (EA) VR environment. We found a significant increase in positive emotions, cognitive load, and empathy toward the CAE agent, suggesting the potential of CAEVR environments to refine user-agent interactions. We identify lessons learned from this study and directions for future work.


Assuntos
Empatia , Realidade Virtual , Gráficos por Computador , Emoções/fisiologia , Conscientização
12.
IEEE Trans Vis Comput Graph; 30(5): 2745-2755, 2024 May.
Article in English | MEDLINE | ID: mdl-38437100

ABSTRACT

Active exploration in virtual reality (VR) involves users navigating immersive virtual environments, going from one place to another. While navigating, users often engage in secondary tasks that require attentional resources, as in the case of distracted driving. Inspired by research generally studying the effects of task demands on cybersickness (CS), we investigated how the attentional demands specifically associated with secondary tasks performed during exploration affect CS. Downstream of this, we studied how increased attentional demands from secondary tasks affect spatial memory and navigational performance. We discuss the results of a multi-factorial between-subjects study, manipulating a secondary task's demand across two levels and studying its effects on CS in two different sickness-inducing levels of an exploration experience. The secondary task's demand was manipulated by parametrically varying $n$ in an aural $n$-back working memory task and the provocativeness of the experience was manipulated by varying how frequently users experienced a yaw-rotational reorientation effect during the exploration. Results revealed that increases in the secondary task's demand increased sickness levels, also resulting in a higher temporal onset rate, especially when the experience was not already highly sickening. Increased attentional demand from the secondary task also vitiated navigational performance and spatial memory. Overall, increased demands from secondary tasks performed during navigation produce deleterious effects on the VR experience.
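The aural $n$-back task used above to load working memory has a simple ground truth: a target occurs whenever the current stimulus matches the one presented $n$ steps earlier, and raising $n$ raises attentional demand. A minimal scorer (illustrative, not the study's software):

```python
def n_back_targets(stimuli, n):
    # Indices where the current stimulus equals the one n steps earlier --
    # the positions a participant should respond to in an n-back task.
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

# 2-back over a letter stream: targets at indices 2 ('A') and 3 ('B').
print(n_back_targets(list("ABABCC"), 2))  # [2, 3]
print(n_back_targets(list("AABB"), 1))    # [1, 3]
```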


Assuntos
Gráficos por Computador , Realidade Virtual , Humanos , Análise e Desempenho de Tarefas , Atenção
13.
IEEE Trans Vis Comput Graph; 30(5): 2400-2410, 2024 May.
Article in English | MEDLINE | ID: mdl-38437088

ABSTRACT

A prerequisite to improving the presence of a user in mixed reality (MR) is the ability to measure and quantify presence. Traditionally, subjective questionnaires have been used to assess the level of presence. However, recent studies have shown that presence is correlated with objective and systemic human performance measures such as reaction time. These studies analyze the correlation between presence and reaction time when technical factors such as object realism and plausibility of the object's behavior change. However, additional psychological and physiological human factors can also impact presence. It is unclear if presence can be mapped to and correlated with reaction time when human factors such as conditioning are involved. To answer this question, we conducted an exploratory study ($N=60$) where the relationship between presence and reaction time was assessed under three different conditioning scenarios: control, positive, and negative. We demonstrated that human factors impact presence. We found that presence scores and reaction times are significantly correlated (correlation coefficient of -0.64), suggesting that the impact of human factors on reaction time correlates with its effect on presence. In demonstrating that, our study takes another important step toward using objective and systemic measures like reaction time as a presence measure.
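The reported correlation of -0.64 between presence scores and reaction times is a Pearson coefficient; for reference, it can be computed directly from paired samples (a generic sketch, not the study's analysis code):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly anti-correlated toy data (higher presence, faster reactions):
print(pearson_r([1, 2, 3], [6, 4, 2]))  # -1.0
```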


Assuntos
Realidade Aumentada , Humanos , Tempo de Reação , Gráficos por Computador , Inquéritos e Questionários
14.
IEEE Trans Vis Comput Graph; 30(5): 2379-2389, 2024 May.
Article in English | MEDLINE | ID: mdl-38437101

ABSTRACT

In this research paper, we conducted a study to investigate the connection between three objective measures: electrocardiogram (ECG), electrogastrogram (EGG), and electroencephalogram (EEG), and individuals' susceptibility to cybersickness. Our primary objective was to identify which of these factors plays a central role in causing discomfort when experiencing rotations along three different axes: roll, pitch, and yaw. This study involved 35 participants who were tasked with destroying asteroids using their eye gaze while undergoing passive rotations in four separate sessions. The results, when combined with subjective measurements (specifically, the Fast Motion Sickness (FMS) questionnaire and the Simulator Sickness Questionnaire (SSQ) score), demonstrated that EGG measurements were superior in detecting symptoms associated with nausea. As for ECG measurements, our observations did reveal significant changes in Heart Rate Variability (HRV) parameters. However, we caution against relying solely on ECG as a dependable indicator for assessing the extent of cybersickness. Most notably, EEG signals emerged as a crucial resource for discerning individual differences related to these rotational axes. Our findings were significant not only in the context of periodic activities but also underscored the potential of aperiodic activities in detecting the severity of cybersickness and an individual's susceptibility to rotational triggers.
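Among the HRV parameters such ECG analyses report, a common time-domain one is RMSSD, computable directly from successive RR intervals. A generic sketch for reference (the abstract does not specify which parameters changed):

```python
import math

def rmssd(rr_ms):
    # Root mean square of successive differences between RR intervals (ms),
    # a standard time-domain heart rate variability parameter.
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(rmssd([800, 810, 790]))  # sqrt((10^2 + (-20)^2) / 2) ~= 15.81
```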


Assuntos
Gráficos por Computador , Enjoo devido ao Movimento , Humanos , Fixação Ocular , Eletroencefalografia , Enjoo devido ao Movimento/diagnóstico , Eletrocardiografia/efeitos adversos
15.
IEEE Trans Vis Comput Graph; 30(5): 2390-2399, 2024 May.
Article in English | MEDLINE | ID: mdl-38437102

ABSTRACT

We present Virtual Reality Self Co-embodiment, a new method for post-stroke upper limb rehabilitation. It is inspired by mirror therapy, where the patient's healthy arm is involved in recovering the affected arm's motion. By tracking the user's head, wrists, and fingers' positions, our new approach allows the handicapped arm to control a digital avatar in order to pursue a reaching task. We apply the concept of virtual co-embodiment to use the information from the unaffected arm and complete the affected limb's impaired motion, which is our added unique feature. This requires users to mechanically involve the incapacitated area as much as possible, prioritizing actual movement rather than the sole imagination of it. As a result, subjects will see a seemingly normally functional virtual arm primarily controlled by their handicapped extremity, but with the constant support of their healthy limb's motion. Our experiment compares the task execution performance and embodiment perceived when interacting with both mirror therapy and our proposed technique. We found that our approach's provided sense of ownership is mildly impacted by users' motion planning response times, which mirror therapy does not exhibit. We also observed that mirror therapy's sense of ownership is moderately affected by the subject's proficiency while executing the assigned task, which our new method did not display. The results indicate that our proposed method provides similar embodiment and rehabilitation capabilities to those perceived from existing mirror therapy. This experiment was performed in healthy individuals to have an unbiased comparison of how mirror therapy's and VRSelfCo's task performance and degree of virtual embodiment compare, but future work explores the possibility of applying this new approach to actual post-stroke patients.


Assuntos
Reabilitação do Acidente Vascular Cerebral , Realidade Virtual , Humanos , Terapia de Espelho de Movimento , Gráficos por Computador , Reabilitação do Acidente Vascular Cerebral/métodos , Extremidade Superior/fisiologia
16.
IEEE Trans Vis Comput Graph; 30(5): 2693-2702, 2024 May.
Article in English | MEDLINE | ID: mdl-38437103

ABSTRACT

Redirected walking (RDW) facilitates user navigation within expansive virtual spaces despite the constraints of limited physical spaces. It employs discrepancies between human visual-proprioceptive sensations, known as gains, to enable the remapping of virtual and physical environments. In this paper, we explore how to apply rotation gain while the user is walking. We propose to apply a rotation gain to let the user rotate by a different angle when reciprocating from a previous head rotation, to achieve the aim of steering the user to a desired direction. To apply the gains imperceptibly based on such a Bidirectional Rotation gain Difference (BiRD), we conduct both measurement and verification experiments on the detection thresholds of the rotation gain for reciprocating head rotations during walking. Unlike previous rotation gains which are measured when users are turning around in place (standing or sitting), BiRD is measured during users' walking. Our study offers a critical assessment of the acceptable range of rotational mapping differences for different rotational orientations across the user's walking experience, contributing to an effective tool for redirecting users in virtual environments.
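In the usual redirected-walking convention, a rotation gain $g$ makes the virtual scene rotate $g$ times the user's physical head rotation, so steering amounts to choosing $g$ within the user's detection thresholds. A minimal sketch of that mapping (the threshold values here are placeholders; the paper measures thresholds specifically for reciprocating head rotations during walking):

```python
def physical_angle(virtual_angle_deg, gain, gain_lo=0.9, gain_hi=1.2):
    # Physical rotation the user performs to see a given virtual rotation,
    # with the gain clamped to a (placeholder) imperceptibility range.
    g = min(max(gain, gain_lo), gain_hi)
    return virtual_angle_deg / g

print(physical_angle(90, 1.2))  # 75.0 -- the user turns less than they see
```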


Assuntos
Gráficos por Computador , Caminhada , Humanos , Animais , Orientação , Meio Ambiente , Aves
17.
IEEE Trans Vis Comput Graph; 30(5): 2591-2601, 2024 May.
Article in English | MEDLINE | ID: mdl-38437092

ABSTRACT

Autism Spectrum Disorder is a neurodevelopmental condition that can affect autonomy and independence. Our research explores the integration of Cross-Reality and Conversational Agents for Autistic persons to improve ability and confidence in everyday life situations. We combine two technologies of the Virtual-Real continuum. User experiences unfold from the simulation of tasks in VR to the execution of similar tasks supported by AR in the real world. A speech-based Conversational Agent is integrated with both VR and AR. It provides contextualized help, promotes generalization, and stimulates users to apply what they learned in the virtual space. The paper presents the approach and describes an empirical study involving 17 young Autistic persons.


Assuntos
Transtorno do Espectro Autista , Transtorno Autístico , Humanos , Gráficos por Computador , Aprendizagem , Poder Psicológico
18.
IEEE Trans Vis Comput Graph; 30(5): 2559-2569, 2024 May.
Article in English | MEDLINE | ID: mdl-38437107

ABSTRACT

For VR interaction, the home environment, with its complicated spatial setup and dynamics, may hinder the VR user experience. In particular, pets' movements may be especially unpredictable. In this paper, we investigate the integration of real-world pet activities into immersive VR interaction. Our pilot study showed that active pet movements, especially those of dogs, could negatively impact users' performance and experience in immersive VR. We proposed three different types of pet integration, namely a semitransparent real-world portal, a non-interactive object in VR, and an interactive object in VR. We conducted a user study with 16 pet owners and their pets. The results showed that compared to the baseline condition without any pet-integration technique, integrating the pet as an interactive object in VR yielded significantly higher participant ratings in perceived realism, joy, multisensory engagement, and connection with their pets in VR.


Assuntos
Gráficos por Computador , Realidade Virtual , Humanos , Animais , Cães , Projetos Piloto , Movimento
19.
IEEE Trans Vis Comput Graph; 30(5): 2580-2590, 2024 May.
Article in English | MEDLINE | ID: mdl-38437094

ABSTRACT

VR exergames offer an engaging solution to combat sedentary behavior and promote physical activity. However, challenges emerge when playing these games in shared spaces, particularly due to the presence of bystanders. VR's passthrough functionality enables players to maintain awareness of their surrounding environment while immersed in VR gaming, rendering it a promising solution to improve users' awareness of the environment. This study investigates the passthrough's impact on player performance and experiences in shared spaces, involving an experiment with 24 participants that examines Space (Office vs. Corridor) and Passthrough Function (With vs. Without). Results reveal that Passthrough enhances game performance and environmental awareness while reducing immersion. Players prefer an open area to an enclosed room, whether with or without Passthrough, finding it more socially acceptable. Additionally, Passthrough appears to encourage participation among players with higher self-consciousness, potentially alleviating their concerns about being observed by bystanders. Our findings provide valuable insights for designing VR experiences in shared spaces, underscoring the potential of VR's passthrough to enhance user experiences and promote VR adoption in these environments.


Assuntos
Jogos Eletrônicos de Movimento , Realidade Virtual , Humanos , Interface Usuário-Computador , Gráficos por Computador , Exercício Físico
20.
IEEE Trans Vis Comput Graph; 30(5): 2087-2097, 2024 May.
Article in English | MEDLINE | ID: mdl-38437111

ABSTRACT

From education to medicine to entertainment, a wide range of industrial and academic fields now utilize eXtended Reality (XR) technologies. This diversity and growing use are boosting research and leading to an increasing number of XR experiments involving human subjects. The main aim of these studies is to understand the user experience in the broadest sense, such as the user's cognitive and emotional states. Behavioral data collected during XR experiments, such as user movements, gaze, actions, and physiological signals, constitute precious assets for analyzing and understanding the user experience. While they help overcome the intrinsic flaws of explicit data such as post-experiment questionnaires, the required acquisition and analysis tools are costly and challenging to develop, especially for 6DoF (Degrees of Freedom) XR experiments. Moreover, there is no common format for XR behavioral data, which restricts data sharing and thus hinders wide usage across the community, the replicability of studies, and the constitution of large datasets or meta-analyses. In this context, we present PLUME, an open-source software toolbox (PLUME Recorder, PLUME Viewer, PLUME Python) that allows for the exhaustive recording of XR behavioral data (including synchronous physiological signals), their offline interactive replay and analysis (with a standalone application), and their easy sharing thanks to our compact and interoperable data format. We believe that PLUME can greatly benefit the scientific community by making behavioral and physiological data available to the greatest number, contributing to the reproducibility and replicability of XR user studies, enabling the creation of large datasets, and contributing to a deeper understanding of user experience.
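A toolbox like this must serialize heterogeneous, timestamped samples in one record stream. A hypothetical record of that kind might look like the following; the field names are illustrative only, and PLUME's actual interoperable format is defined in its own repository and may differ:

```python
from dataclasses import dataclass, asdict

@dataclass
class XRSample:
    # One timestamped behavioral sample; all field names are illustrative.
    t_ms: int              # time since recording start, in milliseconds
    head_pos: tuple        # 6DoF pose: position (x, y, z) ...
    head_rot: tuple        # ... and orientation quaternion (x, y, z, w)
    gaze_dir: tuple        # gaze direction unit vector
    heart_rate_bpm: float  # synchronized physiological channel

sample = XRSample(16, (0.0, 1.6, 0.0), (0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 72.0)
print(asdict(sample)["heart_rate_bpm"])  # 72.0
```

A flat, self-describing record like this is what makes offline replay and data sharing tractable: any tool that understands the schema can reconstruct the session.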


Assuntos
Gráficos por Computador , Software , Humanos , Reprodutibilidade dos Testes , Disseminação de Informação , Inquéritos e Questionários